Attention, Perception, & Psychophysics
Springer Science and Business Media LLC
Preprints posted in the last 30 days, ranked by how well they match Attention, Perception, & Psychophysics's content profile, based on 17 papers previously published here. The average preprint has a 0.00% match score for this journal, so anything above that is already an above-average fit.
Figarola, V.; Liang, W.; Luthra, S.; Parker, E.; Winn, M.; Brown, C.; Shinn-Cunningham, B. G.
Listeners face many challenges when trying to maintain attention to a target source in everyday settings; for instance, reverberation distorts acoustic cues and interruptions capture attention. However, little is known about how these challenges affect the ability to maintain selective attention. Here, we measured syllable recall accuracy and pupil dilation during a spatial selective attention task that was sometimes disrupted. Participants heard two competing, temporally interleaved syllable streams presented in pseudo-anechoic or reverberant environments. On randomly selected trials, a sudden interruption occurred mid-sequence. Compared to anechoic trials, reverberant performance was worse overall, and the interrupter disrupted performance. In uninterrupted trials, reverberation reduced peak pupil dilation both when it was consistent across all stimuli in a block and when it was randomized trial to trial, suggesting temporal smearing reduced clarity of the scene and the salience of events in the ongoing streams. Pupil dilations in response to interruptions indicated perceptual salience was strong across reverberant and anechoic conditions. Specifically, baseline pupil size before trials did not vary across room conditions, and mixing or blocking of trials (altering stimulus expectations) had no impact on pupillary responses. Together, these findings highlight that stimulus salience drives cognitive load more strongly than does task performance.
Engeser, M.; Babaei, N.; Kaiser, D.
Each person looks at natural scenes in their own unique way, resulting in a distinct perceptual experience of the world. However, little is known about why such differences in gaze emerge. Here, we test the hypothesis that idiosyncrasies in gaze behavior are predicted by inter-subject variations in internal models (expectations about how scenes typically look). In two experiments, we first characterized participants' personal internal models by asking them to draw typical bathroom and kitchen scenes. Individual differences in these drawings were quantified using an objective deep learning pipeline and, in turn, related to individual differences in gaze behavior. In Experiment 1, where participants freely viewed a set of kitchen and bathroom photographs, inter-subject similarities in internal models did not predict inter-subject similarities in gaze. In Experiment 2, we encouraged strategic exploration through gaze-contingent viewing and a memory task. Here, inter-subject similarities in internal models predicted similarities in fixation frequency and the sequence in which different object categories were inspected. These findings suggest that the influence of internal models on visual exploration is stronger under increased sensory uncertainty and when expectation-guided sampling of the environment is encouraged. Together, our results provide new insights into how individual expectations shape gaze behavior and help explain why people differ in how they explore the visual world.
Eccher, E.; Salva, O. R.; Chiandetti, C.; Vallortigara, G.
Numerical abilities are widespread in the animal kingdom and are not exclusive to humans. Domestic chicks (Gallus gallus) have been shown to discriminate numerosities spontaneously, but prior research has focused exclusively on the visual modality. Whether chicks can discriminate numerical information in the auditory domain remains unknown, despite evidence that they can perceive other auditory features such as tone and rhythm. In this study, we investigated spontaneous numerical discrimination in the auditory modality in naive domestic chicks. In Experiment 1, newly-hatched chicks were tested for their ability to discriminate between two auditory sequences differing in numerosity (4 vs. 12 identical sounds), with and without controlling for continuous variables such as duration and total sound amount. Experiment 2 examined chicks' filial imprinting responses to familiar or unfamiliar numerosities. Experiment 3 controlled for potential spontaneous preferences for a single longer sound versus a shorter one. Our results showed a preference for the 12-sound sequence only when duration and total sound amount were not matched. When these continuous variables were controlled, no spontaneous numerical preference emerged. Experiment 2 revealed an overall preference for the 12-sound sequence regardless of imprinting conditions, while Experiment 3 confirmed that chicks do not have an inherent preference for longer sounds. These findings suggest that chicks are sensitive to overall magnitude in the auditory domain but do not spontaneously discriminate numerical differences when other continuous variables are held constant. Future studies will explore how specific stimulus features, such as heterogeneity of sounds, influence these preferences.
Zylberberg, A.
The ability to evaluate one's own knowledge states is often studied using paradigms in which participants make a decision and subsequently report their confidence. This structure has motivated hierarchical models in which confidence arises from a metacognitive process, distinct from the decision process itself, that estimates the probability that the choice is correct (Meyniel et al., 2015; Pouget et al., 2016; Fleming and Daw, 2017). Here, we contrast this framework with an alternative based on an intentional architecture (Shadlen et al., 2008). In this account, choice and confidence are determined simultaneously through a multidimensional drift-diffusion process, where each dimension represents one choice-confidence combination (Ratcliff and Starns, 2009, 2013). Choice, response time, and confidence jointly emerge when one of these accumulators reaches a decision bound. To adjudicate between these accounts, we fit both models to behavioral data from two perceptual tasks: a random-dots motion discrimination task with incentivized confidence reports, and a luminance discrimination task without feedback or incentives. The integrated model provided a superior fit for the incentivized motion task, whereas the hierarchical model more accurately captured behavior in the un-incentivized luminance task. These results suggest that confidence does not rely on a single computational mechanism, but rather its implementation may adapt to the specific demands and structure of the task.
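The intentional (integrated) architecture described in this abstract can be illustrated with a minimal race simulation, in which each accumulator stands for one choice-confidence combination and the first to reach the bound jointly fixes choice, confidence, and response time. This is an illustrative sketch with arbitrary parameters, not the fitted model from the preprint:

```python
import numpy as np

def race_trial(drifts, bound=1.0, dt=1e-3, noise=1.0, rng=None):
    """Simulate one trial of a race among accumulators.

    Each accumulator corresponds to one choice-confidence combination;
    the first to reach the bound determines the response and its time.
    Drifts, bound, and noise scale are arbitrary illustration values.
    """
    if rng is None:
        rng = np.random.default_rng()
    drifts = np.asarray(drifts, dtype=float)
    x = np.zeros_like(drifts)
    t = 0.0
    while True:
        t += dt
        x += drifts * dt + noise * np.sqrt(dt) * rng.standard_normal(len(drifts))
        winners = np.flatnonzero(x >= bound)
        if winners.size:
            return int(winners[0]), t

rng = np.random.default_rng(0)
# e.g. four accumulators: (left, low conf), (left, high), (right, low), (right, high)
winner, rt = race_trial([0.2, 0.1, 0.6, 0.9], rng=rng)
```

The index of the winning accumulator encodes both the choice and the confidence level, so no separate metacognitive read-out stage is needed in this account.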
Horvath, G.; Rado, J.; Czigler, A.; Fülöp, D.; Sari, Z.; Kovacs, I.; Buzas, P.; Jando, G.
Binocular vision depends on the integration of matching visual features across the two eyes, while conflicting interocular signals can engage active inhibitory processes in the visual system. To investigate the temporal dynamics of these putative inhibitory processes, we examined how transitions between different binocular correlation states influence perceptual detectability and response speed. Using dynamic random-dot correlograms (free of monocular cues and allowing precise interocular manipulation), we presented brief target intervals embedded in longer background sequences. Stimuli varied in binocular correlation: correlated (C) patterns contained identical luminance profiles in both eyes, anticorrelated (A) patterns had inverted luminance dots, and uncorrelated (U) patterns had independent dot arrangements. Across three experiments, we measured (1) the presentation duration threshold required to detect a change in correlation, (2) simple reaction times (RTs) to the same transitions at suprathreshold levels, and (3) psychometric functions across durations for selected transitions. In Experiment 1, A→C transitions yielded significantly higher duration thresholds than C→A, indicating a suppressive influence associated with prior anticorrelation. In contrast, Experiment 2 showed that A→C transitions produced the shortest RTs, while C→U transitions were slowest, suggesting a rebound-like facilitation following prior suppression. Experiment 3 confirmed these temporal and contrast dependencies, with opposite changes in contrast threshold and reaction times between transitions toward and away from the correlated fusional states. This divergence between perceptual onset and reaction time is consistent with a two-phase account in which binocular anticorrelation is associated with an initial suppressive phase followed by rebound-like facilitation that accelerates responses once the target becomes detectable.
These findings are consistent with current models of binocular rivalry and fusion, and provide a temporally resolved behavioral perspective on how inhibitory control in sensory systems may dynamically influence subsequent responsiveness under conditions of perceptual ambiguity.
Proverbio, A. M.; Qin, C.
This study examines the temporal dynamics of expressive piano performance by means of a quantitative analysis of motor timing in an elite pianist, with particular reference to stylistic contrasts between Baroque and Romantic repertoire. In line with kinematic models of expressive timing, which describe musical performance as reflecting principles of biological motion, we examined whether a common temporal structure underlies stylistically divergent executions. Despite marked differences in structural complexity and gesture density, both performances exhibited a shared low-frequency oscillatory pattern (~0.36 Hz) in beat-level timing variability. This infra-delta rhythmic modulation is consistent with the presence of an underlying motor timing scaffold and suggests a common temporal organization across expressive behaviors. These findings support the hypothesis that musical performance relies on a rhythmically structured control architecture, potentially shared with other complex motor activities such as speech and locomotion.
Altinordu, N.; Boynton, G. M.; Fine, I.
Color is a prominent feature of visual experience, yet humans can recognize objects easily and accurately from grayscale images. We examined whether color becomes more useful when spatial information is degraded due to blurring. Participants viewed naturalistic scenes in color or grayscale, and reported whether a named target object was present across a range of blur levels that simulated optical defocus from 0-8 diopters. With unblurred images, performance did not differ between color and grayscale conditions, but as blur increased, recognition accuracy declined. Color provided a modest but reliable advantage at higher levels of blur, suggesting that color becomes increasingly useful when optical quality is degraded. We hypothesize that the evolutionary shift towards trichromacy may have been partially driven by the need to compensate for optical degradation due to aging and/or accumulated light exposure.
Kalburge, I.; Dallstream, A.; Josic, K.; Kilpatrick, Z. P.; Ding, L.; Gold, J. I.
Decisions based on evidence accumulated over time require rules governing when to end the accumulation process and commit to a choice. These rules control inherent trade-offs between decision speed and accuracy, which require careful balance to maximize quantities that depend on both, such as reward rate. We previously showed that, to maximize reward rate, normative decision rules adapt to changing task conditions (Barendregt et al., 2022). Here we used a novel task to examine whether and how people use adaptive rules for individual decisions under a variety of conditions, including changes in decision outcomes across trials and changes in evidence quality both across and within trials. We found that participants tended to use rules that adjusted, at least partially, to predictable changes in task conditions to improve reward rate, consistent with a rationally bounded implementation of normative principles. These findings help inform our understanding of the extent and limits of flexible decision formation in the brain.
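Reward rate, the quantity these normative rules trade speed against accuracy to maximize, has a standard definition: expected reward per trial divided by expected time per trial. A minimal sketch of that definition (any penalty or timing terms specific to the actual task are omitted):

```python
def reward_rate(p_correct, mean_rt, inter_trial_interval, reward=1.0):
    """Expected reward per unit time: accuracy times reward per trial,
    divided by the mean total time per trial (decision time + ITI)."""
    return p_correct * reward / (mean_rt + inter_trial_interval)

# a faster, less accurate rule can beat a slower, more accurate one
fast = reward_rate(p_correct=0.80, mean_rt=0.5, inter_trial_interval=1.0)
slow = reward_rate(p_correct=0.95, mean_rt=1.5, inter_trial_interval=1.0)
```

Because the denominator includes dead time between trials, the optimal stopping bound depends on task timing, which is why normative rules must adapt when task conditions change.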
Augsten, M.-L.; Lindenbeck, M. J.; Laback, B.
Cochlear implant (CI) users typically experience difficulties perceiving musical harmony due to a restricted spectro-temporal resolution at the electrode-nerve interface, resulting in limited pitch perception. We investigated how stimulus parameters affect discrimination of complex-tone triads (three-voice chords), aiming to identify conditions that maximize perceptual sensitivity. Six post-lingually deafened CI listeners completed a same/different task with harmonic complex tones, while spectral complexity, voice(s) containing a pitch change, and temporal synchrony (simultaneous vs. sequential triad presentation) were manipulated. CI listeners discriminated harmonically relevant one-semitone pitch changes within triads when spectral complexity was reduced to three or five components per voice, with significantly better performance for three-component compared to nine-component tones. Sensitivity was observed for pitch changes in the high voice or in both high and low voices, but not for changes in only the low voice. Single-voice sensitivity predicted simultaneous-triad sensitivity when controlling for spectral complexity and voice with pitch change. Contrary to expectations, sequential triad presentation did not improve discrimination. An analysis of processor pulse patterns suggests that difference-frequency cues encoded in the temporal envelope rather than place-of-excitation cues underlie perceptual triad sensitivity. These findings support reducing spectral complexity to enhance chord discrimination for CI users based on temporal cues.
Wang, P.; Schoenfeld, M. J.; Maye, A.; Daume, J.; Schneider, T. R.; Engel, A. K.
Predicting the time point when an event will occur is fundamental for adaptive behavior, yet it remains unresolved whether temporal prediction can be influenced by low-frequency rhythmic modulation of sensory stimuli. Here, we tested whether external rhythmic sensory stimulation at a frequency in the delta range (0.5 - 3 Hz) alters performance in a visual temporal prediction task. Participants judged whether a moving visual stimulus reappeared too early or too late after disappearing behind an occluder, while the temporal structure of crossmodal sensory input was manipulated across two behavioral sessions. Results indicated that in the visual-auditory conditions, oscillatory stimulation in either the visual or auditory modality improved performance, whereas decaying sensory intensity over time impaired performance. In visual-tactile conditions, oscillatory visual stimulation also enhanced sensitivity, but rhythmic tactile stimulation did not produce a comparable benefit in performance. Critically, tactile stimulation improved performance only when aligned to the expected disappearance of the visual stimulus, demonstrating that the phase relationship between sensory input and intrinsic delta oscillations is behaviorally relevant. Together, these findings indicate that temporal prediction depends on the temporal structure of sensory input and support the relevance of delta-band oscillations in predictive behavior across and within sensory modalities. Hence, rhythmic modulation of sensory stimuli may provide a tool to enhance temporal prediction accuracy by stimulating oscillatory neural dynamics.
Yavuz, E.; Xu, C.; Liu, W.; Slinn, C.; Mitchell, A.; Ali, J.; Bloom, N.; Khatun, N.; Kirk, P.; Zisch, F.; Tachtsidis, I.; Pinti, P.; Ronca, F.; Patai, Z.; Burgess, P.; Hamilton, A.; Spiers, H.
Orcas, wolves, chimpanzees, and humans share a similarly impressive capacity for group hunting, where individuals coordinate their behaviour to capture prey. Studying hunting behaviours has important implications for understanding how behaviour in group contexts may be indicative of cognitive decline. Despite growing interest in brain circuits for prey capture, the brain regions involved in tracking prey during a hunt, and the behaviours in a group hunt linked to success, remain unclear. Here we combined functional near-infrared spectroscopy (fNIRS) and a virtual Minecraft world to examine the behaviour, brain dynamics, and brain synchrony involved in group hunting. We focused on the prefrontal cortex (PFC) due to its known role in planning and social coordination and recorded from pairs of individuals as they either cooperated to hunt another person (prey) or simply followed another person. Hunters were more successful if they managed to keep a smaller distance to the prey and moved at speeds that were more synchronised with their co-predator. At high-range frequencies for fNIRS (0.1-0.2 Hz), we found greater brain-to-brain synchrony in lateral and medial (frontopolar) PFC regions during hunting compared with chance levels. Together, these findings provide insights into which behaviours and brain dynamics are associated with successful group hunting.
Yang, J.; Carter, O.; Shivdasani, M. N.; Grayden, D. B.; Hester, R.; Barutchu, A.
Selective attention enables the prioritization of task-relevant information while managing distractors, and steady-state visual evoked potentials (SSVEPs) are widely used to track this process by tagging different visual objects at distinct flicker frequencies. However, whether the choice of tagging frequency itself influences other neural and cognitive measures remains unclear. Here, 27 participants performed detection and 1-back working memory tasks while a central target and peripheral distractors flickered at either 8.6 Hz or 12 Hz. The working memory task produced slower responses, more errors, and greater perceived difficulty than detection. Tagging frequency strongly shaped neural responses, with 8.6 Hz eliciting higher SSVEP signal-to-noise ratios than 12 Hz regardless of stimulus location. Nevertheless, stronger SSVEP responses for centrally attended stimuli were associated with fewer working memory errors and larger early visual ERP responses, while SSVEPs for attended and distractor stimuli were negatively correlated. In addition, the working memory task produced a larger P1-N1 peak-to-peak difference, and tagging frequency altered the timing and amplitude of early ERP effects. Together, these findings show that tagging frequency is not a neutral methodological parameter, but one that shapes both neural indices of attention and their relationship to cognitive performance.
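The SSVEP signal-to-noise ratio referred to in this abstract is commonly computed as the spectral power at the tagging frequency divided by the mean power of nearby frequency bins. A generic sketch of that computation (not the authors' pipeline; the synthetic signal and all parameters are illustrative):

```python
import numpy as np

def ssvep_snr(signal, fs, tag_freq, n_neighbors=5):
    """SNR at tag_freq: power in that frequency bin divided by the
    mean power of the surrounding bins (excluding the bin itself)."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    idx = int(np.argmin(np.abs(freqs - tag_freq)))
    lo = max(idx - n_neighbors, 0)
    hi = min(idx + n_neighbors + 1, len(spectrum))
    neighbors = np.delete(spectrum[lo:hi], idx - lo)
    return spectrum[idx] / neighbors.mean()

# synthetic check: a 12 Hz sinusoid in noise should give a high SNR at 12 Hz
fs, dur = 256, 4.0
t = np.arange(0, dur, 1 / fs)
rng = np.random.default_rng(1)
sig = np.sin(2 * np.pi * 12 * t) + 0.5 * rng.standard_normal(len(t))
snr = ssvep_snr(sig, fs, 12.0)
```

Because this measure is a ratio against neighbouring bins, it is robust to broadband noise differences, but, as the abstract notes, it is not neutral to the choice of tagging frequency itself.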
van Zantwijk, L.; Rolfs, M.; Ohl, S.
When one object approaches another object which, upon touching, moves in the same direction, humans report a vivid impression of one launching the other. Visual adaptation can alter this perception of causality: observers less often report seeing a launch after viewing a stream of launch events. In three experiments, we further characterised how visual adaptation influences the perception of causality by determining the spatial specificity of adaptation and the timecourse of recovery from adaptation. In Experiment 1, observers saw ambiguous test events (i.e., the overlap between the two objects varied over trials) at three different horizontal eccentricities. Adaptation was strongest when adaptor and test event were presented at the same eccentricity, and absent when the two were separated by just three degrees of visual angle. Moreover, the perception of causality gradually recovered from adaptation, but remained incomplete. In Experiment 2, both long and short adaptation sequences were highly effective in driving adaptation, and showed no difference in the recovery timecourse, which was complete following more experimental blocks. In Experiment 3, a break without any task-relevant visual input also led to a recovery over the same timespan, but this time, the recovery was instantaneous and incomplete. Altogether, our results provide evidence for highly spatially specific computations, instantaneously responding to the onset of adaptation and then gradually recovering from the adaptation over a short time window.
Noerenberg, W.; Schweitzer, R.; Rolfs, M.
Saccadic eye movements sweep the visual scene across the retina, yet the resulting motion is rarely perceived. Visual factors alone, such as the presence of static pre- and post-saccadic images, can attenuate motion perception, suggesting a masking of the motion signal during early visual processing. Here, we isolated the visual component of this reduction in motion perception using simulated saccades presented to fixating observers. Across two experiments, we manipulated motion amplitude (6-18 dva), duration, and velocity profile and measured perceived amplitude and velocity at varying masking durations. Visual masking strongly reduced perceived motion amplitude and velocity, with short halftimes (~15 ms) that were largely invariant across saccade amplitudes. Critically, motion following a naturalistic saccadic velocity profile was perceived as smaller and slower than constant-velocity motion matched in amplitude and duration, even without explicit masking. This additional reduction increased with both amplitude and duration. These results show that visual mechanisms alone can account for substantial motion reduction across a large range of amplitudes and demonstrate a partially separable contribution of the saccadic velocity profile, suggesting that the temporal structure of retinal motion itself supports perceptual continuity across eye movements.
Ruffino, C.; Jacquet, T.; Lepers, R.; Papaxanthis, C.; Truong, C.
Mental fatigue is known to impair cognitive and motor performance, but its impact on motor learning remains unclear. This study examined how mental fatigue affects skill acquisition in a sequential finger-tapping task. Twenty-eight participants were assigned to either a mental fatigue group, which completed a thirty-minute Stroop task, or a control group, which watched a documentary of equivalent duration. Both groups then trained on the finger-tapping task across multiple practice blocks with brief rest periods. Overall motor skill improved similarly in both groups. However, mental fatigue altered the pattern of acquisition: participants in the fatigue group showed decreased performance during practice blocks, which was compensated by larger gains during inter-block rest periods. A strong negative correlation was observed between online decrements and offline improvements, indicating that greater declines during practice were associated with larger gains during rest. This study highlights the critical role of rest periods in maintaining learning under cognitively demanding conditions and provides insight into how internal states, such as mental fatigue, can selectively influence the expression of performance without compromising overall learning.
Kerjean, E.; Avargues-Weber, A.; Howard, S.
Despite growing evidence that many animals can evaluate quantities, the ecological relevance of numerical cognition remains debated, particularly outside vertebrates. Would individuals still rely on numerousness if less computationally demanding cues, such as visual features extracted at an early stage of visual processing, were available to assess quantity? In primates, individuals show a numerical bias: they tend to rely on the number of items rather than non-numerical cues, such as total area, to categorize quantities. In this study, we trained free-flying honeybees to discriminate between two and four items in conditions where numerosity covaried with total area and perimeter (Experiment Size) or with the convex hull (Experiment Space), mimicking ecological contexts. Transfer tests assessed which numerical or non-numerical cues were learned and preferentially used by the bees. Bees primarily relied on numerousness over these non-numerical cues. Individual analyses revealed two consistent strategies: a "numerical bias" strategy, in which bees encoded numerical information while ignoring non-numerical cues, and a "generalist" strategy, in which bees flexibly switched between cues and favored non-numerical information when cues conflicted. We further observed improved discrimination when smaller quantities appeared on the left and larger ones on the right, consistent with an oriented mental number line. Together, these findings demonstrate a spontaneous numerical bias in honeybees and reveal that individuals within the same species can adopt distinct strategies when evaluating quantity. Our findings also suggest that distantly related taxa like bees and primates may have independently evolved comparable mechanisms for quantity evaluation.
Grandchamp des Raux, H.; Ghilardi, T.; Ferre, E. R.; Ossmy, O.
A critical aspect of human cognition is the ability to use our knowledge about the laws of physics to make predictions about physical events. Whether this ability is based on abstract processes or is grounded in our body-environment interactions remains an open debate. We used physical reasoning under altered gravity as a model system to show that humans' real-time embodied experience modifies their high-level physical reasoning. Specifically, we tested participants in computerised reasoning games, while disrupting their gravitational signalling using Galvanic Vestibular Stimulation (GVS). Participants failed more often and used suboptimal strategies under the GVS condition compared to no-GVS in games requiring reasoning about terrestrial gravity. However, the effects of GVS were reduced when the games included reasoning about altered gravity. Our findings demonstrate how the physical experience of the body shifts high-level cognitive skills such as reasoning, suggesting that humans' mental representation of the world is grounded in adaptable physical mechanisms.
Kim, J.; Lee, S.; Nam, K.
A central question in the psycholinguistics of visual word recognition is whether morphologically complex words are obligatorily decomposed into stems and affixes during recognition or whether whole-word access can occur when forms are frequent and familiar. The present study investigated how morphological complexity and lexical frequency jointly shape neural responses by leveraging Korean nominal inflection, whose transparent stem-suffix structure permits a clean dissociation between base (stem) frequency and surface (whole-word) frequency. Twenty-five native Korean speakers completed a rapid event-related fMRI lexical decision task involving simple and inflected nouns that varied parametrically in both frequency measures. Representational similarity analysis (RSA) revealed robust encoding of surface frequency, but not base frequency, in the inferior frontal gyrus (IFG) pars opercularis and supramarginal gyrus (SMG), with significantly stronger correlations for inflected than simple nouns. Univariate analyses converged with this result: surface frequency selectively increased activation for inflected nouns in inferior parietal regions, whereas base frequency showed no reliable effects in any ROI. These findings challenge models positing obligatory pre-lexical decomposition, instead supporting accounts in which morphological processing is shaped by post-lexical, usage-driven lexical statistics. Taken together, our findings support a distributed perspective on morphological processing, suggesting that structural and statistical factors jointly constrain access to morphologically complex forms.
Maracia, B. C. B.; Souza, T. R.; Oliveira, G. S.; Nunes, J. B. P.; dos Santos, C. E. S.; Peixoto, C. B.; Lopes-Silva, J. B.; Nobrega, L. A. O. d. A.; Araujo, P. A. d.; Souza, R. P.; Souza, B. R.
Dance is a core form of human-environment interaction and a powerful medium for emotional expression, yet dancers are routinely exposed to environmental affective cues that may shape their movement. We tested whether a negative emotional context induced immediately before improvisation alters dance biomechanics. Twenty professional dancers performed two 3-min improvised dances. Between dances, they viewed either Neutral or Negatively valenced pictures from the International Affective Picture System (IAPS; 2 min 40 s, 5 s per image). Eye tracking verified attention to the visual stream. Mood was assessed at four time points (PT1-PT4) using the Brazilian Mood Scale (BRAMS), and full-body, three-dimensional kinematics were captured at 300 Hz using a 9-camera optoelectronic system (Qualisys) and processed to measure global movement amplitude and expansion. Negative IAPS exposure increased tension, depression, and fatigue, and decreased vigor from PT2 to PT3. Biomechanically, dancers in the Negative condition showed a significant reduction in global movement amplitude after negative IAPS exposure, with reduced movement amplitude of the body extremities. In contrast, global movement expansion remained unchanged; that is, the extremities were not positioned closer to or farther from the pelvis. Neutral images produced no mood change and no measurable modulation of movement amplitude or expansion. Together, these results support the hypothesis that improvised dance carries biomechanical signatures of the dancers' current affective state, beyond the intended expressive content, and provide an automated motion-capture workflow for studying emotion-movement coupling in spontaneous dance.
Highlights
- Negative visual context shifted dancers' mood toward negative affect
- Negative images reduced movement amplitude in improvised dance
- Movement expansion remained stable despite mood induction
King, C. D.; Groh, J. M.
Eye movement-related eardrum oscillations (EMREOs) appear to consist of a pulse of oscillation occurring in conjunction with saccades. However, this apparent pulse could occur either because there is an increase in energy at that frequency at the time of saccades (a true pulse), or because there is saccade-related phase resetting of ongoing energy at that frequency band, thus appearing like a pulse when averaged in the time domain across many trials. Here we conducted a spectral analysis at the individual trial level in humans performing a visually guided saccade task to determine whether the power at the EMREO frequency (30-45 Hz) is higher during saccades than during steady fixation. We found both an increase in sound power in the EMREO frequency band associated with saccades, i.e., sound pulses at the individual trial level, as well as phase resetting at saccade onset/offset. While both factors contribute to the apparently pulse-like EMREO signal, phase resetting appears to be more prevalent across participants. The prevalence of phase resetting has implications for the underlying mechanism(s) producing EMREOs as well as functional consequences for how the ear might respond to incoming sound in an eye-position dependent fashion.
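Testing for a true pulse, as opposed to phase resetting alone, starts from a per-trial band-power estimate: if only phase resets, 30-45 Hz power in saccade windows should not exceed that in fixation windows. A generic sketch of such a comparison on synthetic microphone data (illustrative only, not the authors' analysis):

```python
import numpy as np

def band_power(x, fs, f_lo=30.0, f_hi=45.0):
    """Mean spectral power of x within [f_lo, f_hi] Hz."""
    spectrum = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return spectrum[band].mean()

# synthetic single-trial windows: fixation is noise; the saccade window
# additionally contains a 38 Hz oscillation standing in for a true pulse
fs = 2000
t = np.arange(0, 0.5, 1 / fs)
rng = np.random.default_rng(2)
fixation = 0.1 * rng.standard_normal(len(t))
saccade = fixation + 0.5 * np.sin(2 * np.pi * 38 * t)
```

Comparing `band_power(saccade, fs)` against `band_power(fixation, fs)` trial by trial separates a genuine power increase from phase resetting, which would leave single-trial band power unchanged.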